5,819 research outputs found

    Robotic control of the seven-degree-of-freedom NASA laboratory telerobotic manipulator

    A computationally efficient robotic control scheme for the NASA Laboratory Telerobotic Manipulator (LTM) is presented. This scheme utilizes the redundancy of the seven-degree-of-freedom LTM to avoid joint limits and singularities. An analysis to determine singular configurations is presented. Performance criteria are determined based on the joint limits and singularity analysis. The control scheme is developed in the framework of resolved rate control using the gradient projection method, and it does not require the generalized inverse of the Jacobian. An efficient formulation for determining the joint velocities of the LTM is obtained. This control scheme is well suited for real-time implementation, which is essential if the end-effector trajectory is continuously modified based on sensory feedback. Implementation of this scheme on a Motorola 68020 VME bus-based controller of the LTM is in progress. Simulation results demonstrating the redundancy utilization in the robotic mode are presented.
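    The redundancy-resolution idea the abstract describes can be sketched with the textbook gradient-projection form. This is a minimal illustration only: the LTM scheme itself avoids the generalized inverse of the Jacobian, which this sketch uses for brevity, and the joint-limit/singularity performance criterion H is left abstract.

```python
import numpy as np

def gradient_projection_step(J, x_dot, grad_H, k=1.0):
    """One resolved-rate step for a redundant arm (7 joints, 6 task DOF):
    qdot = J# x_dot + k (I - J# J) grad_H, where J# is the pseudoinverse
    and grad_H is the gradient of a criterion steering joints away from
    limits and singular configurations. Textbook form, not the LTM's
    inverse-free formulation."""
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    null_proj = np.eye(n) - J_pinv @ J   # projector onto the Jacobian null space
    return J_pinv @ x_dot + k * null_proj @ grad_H

# Sanity check: the self-motion term must not disturb the end-effector velocity.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))      # hypothetical 6x7 Jacobian
x_dot = rng.standard_normal(6)       # commanded end-effector velocity
grad_H = rng.standard_normal(7)      # hypothetical criterion gradient
q_dot = gradient_projection_step(J, x_dot, grad_H)
assert np.allclose(J @ q_dot, x_dot, atol=1e-8)
```

    The assertion holds because the null-space projector satisfies J (I - J# J) = 0, so the joint-limit-avoidance motion is invisible at the end effector.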

    Solar System Processes Underlying Planetary Formation, Geodynamics, and the Georeactor

    Only three processes, operant during the formation of the Solar System, are responsible for the diversity of matter in the Solar System and are directly responsible for planetary internal structures, including planetocentric nuclear fission reactors, and for dynamical processes, including, especially, geodynamics. These processes are: (i) Low-pressure, low-temperature condensation from solar matter in the remote reaches of the Solar System or in the interstellar medium; (ii) High-pressure, high-temperature condensation from solar matter associated with planetary formation by raining out from the interiors of giant gaseous protoplanets; and (iii) Stripping of the primordial volatile components from the inner portion of the Solar System by super-intense solar wind associated with T-Tauri-phase mass ejections, presumably during the thermonuclear ignition of the Sun. As described herein, these processes lead logically, in a causally related manner, to a coherent vision of planetary formation with profound implications including, but not limited to: (a) Earth's formation as a giant gaseous Jupiter-like planet with vast amounts of stored energy of protoplanetary compression in its rock-plus-alloy kernel; (b) Removal of approximately 300 Earth-masses of primordial gases from the Earth, which began Earth's decompression process, making available the stored energy of protoplanetary compression for driving geodynamic processes, which I have described by the new whole-Earth decompression dynamics and which is responsible for emplacing heat at the mantle-crust interface at the base of the crust through the process I have described, called mantle decompression thermal-tsunami; and (c) Uranium accumulations at the planetary centers capable of self-sustained nuclear fission chain reactions. Comment: Invited paper for the Special Issue of Earth, Moon, and Planets entitled Neutrino Geophysics. Added final corrections for publication.

    The laboratory telerobotic manipulator program

    New opportunities for the application of telerobotic systems to enhance human intelligence and dexterity in the hazardous environment of space are presented by the NASA Space Station Program. Because of the need for significant increases in extravehicular activity and the potential increase in hazards associated with space programs, emphasis on telerobotic systems research and development is being heightened. The Laboratory Telerobotic Manipulator (LTM) program is being performed to develop and demonstrate ground-based telerobotic manipulator system hardware for research and demonstrations aimed at future NASA applications. The LTM incorporates traction drives, modularity, redundant kinematics, and state-of-the-art hierarchical control techniques to form a basis for merging the diverse technological domains of robust, high-dexterity teleoperation and autonomous robotic operation into common hardware to further NASA's research.

    Inferential considerations for low-count RNA-seq transcripts: a case study on the dominant prairie grass Andropogon gerardii

    Citation: Raithel, S., Johnson, L., Galliart, M., Brown, S., Shelton, J., Herndon, N., & Bello, N. M. (2016). Inferential considerations for low-count RNA-seq transcripts: a case study on the dominant prairie grass Andropogon gerardii. BMC Genomics, 17, 16. doi:10.1186/s12864-016-2442-7
    Background: Differential expression (DE) analysis of RNA-seq data still poses inferential challenges, such as handling of transcripts characterized by low expression levels. In this study, we use a plasmode-based approach to assess the relative performance of alternative inferential strategies on RNA-seq transcripts, with special emphasis on transcripts characterized by a small number of read counts, so-called low-count transcripts, as motivated by an ecological application in prairie grasses. Big bluestem (Andropogon gerardii) is a wide-ranging dominant prairie grass of ecological and agricultural importance to the US Midwest, while the edaphic subspecies sand bluestem (A. gerardii ssp. hallii) grows exclusively on sand dunes. Relative to big bluestem, sand bluestem exhibits qualitative phenotypic divergence consistent with enhanced drought tolerance, plausibly associated with transcripts of low expression levels. Our dataset consists of RNA-seq read counts for 25,582 transcripts (60% of which are classified as low-count) collected from leaf tissue of individual plants of big bluestem (n = 4) and sand bluestem (n = 4). Focused on low-count transcripts, we compare alternative ad-hoc data filtering techniques commonly used in RNA-seq pipelines and assess the inferential performance of recently developed statistical methods for DE analysis, namely DESeq2 and edgeR robust. These methods attempt to overcome the inherently noisy behavior of low-count transcripts by either shrinkage or differential weighting of observations, respectively.
    Results: Both DE methods seemed to properly control family-wise type I error on low-count transcripts, whereas edgeR robust showed greater power and DESeq2 showed greater precision and accuracy. However, specification of the degrees-of-freedom parameter under edgeR robust had a non-trivial impact on inference and should be handled carefully. When properly specified, both DE methods showed overall promising inferential performance on low-count transcripts, suggesting that ad-hoc data filtering steps at arbitrary expression thresholds may be unnecessary. A note of caution is in order regarding the approximate nature of DE tests under both methods. Conclusions: Practical recommendations for DE inference are provided when low-count RNA-seq transcripts are of interest, as is the case in the comparison of subspecies of bluestem grasses. Insights from this study may also be relevant to other applications focused on transcripts of low expression levels.
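    For context, the kind of ad-hoc arbitrary-threshold filter the study suggests may be unnecessary looks roughly like this. The count matrix and thresholds below are invented for illustration, not the paper's data.

```python
import numpy as np

def filter_low_counts(counts, min_count=5, min_samples=4):
    """Keep a transcript only if it reaches min_count reads in at least
    min_samples libraries -- an arbitrary-threshold filter of the kind the
    study compares against model-based handling (DESeq2, edgeR robust)."""
    return (counts >= min_count).sum(axis=1) >= min_samples

# Mock matrix: 5 transcripts x 8 libraries (4 big bluestem + 4 sand bluestem)
counts = np.array([
    [0, 1, 0, 2, 0, 1, 0, 0],                   # low-count transcript
    [900, 950, 1000, 870, 910, 990, 940, 960],  # highly expressed
    [5, 6, 4, 7, 5, 6, 5, 4],                   # borderline
    [0, 0, 0, 0, 0, 0, 0, 0],                   # unexpressed
    [100, 90, 110, 95, 105, 98, 102, 97],       # moderately expressed
])
keep = filter_low_counts(counts)   # [False, True, True, False, True]
```

    Note that the first transcript, exactly the low-count class the study targets, is discarded before any DE testing, which is why such thresholds matter for the comparison of methods.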

    Heat flow of the Earth and resonant capture of solar 57-Fe axions

    In a very conservative approach, supposing that the total heat flow of the Earth is exclusively due to resonant capture inside the Earth of axions emitted by 57-Fe nuclei in the Sun, we obtain a limit on the mass of the hadronic axion: m_a < 1.8 keV. Taking into account the release of heat from decays of 40-K, 232-Th, and 238-U inside the Earth, this estimate can be improved to m_a < 1.6 keV. Both values are less restrictive than limits set in dedicated searches for 57-Fe axions (m_a < 216-745 eV), but are much better than limits obtained in experiments with 83-Kr (m_a < 5.5 keV) and 7-Li (m_a < 13.9-32 keV). Comment: 8 pages.
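    The relation between the two quoted limits can be checked with a one-line scaling argument, under the assumption (standard for a hadronic axion, but an assumption of this sketch) that both the solar emission rate and the resonant-capture cross-section scale as the coupling squared, with coupling proportional to m_a, so the heat deposited in the Earth goes as m_a**4.

```python
# Limits quoted in the abstract:
m_a_total = 1.8     # keV: axions allowed the entire terrestrial heat flow
m_a_reduced = 1.6   # keV: after subtracting 40-K, 232-Th, 238-U radiogenic heat

# If deposited heat ~ m_a**4, the implied fraction of the heat budget
# left available to axions after the radiogenic subtraction is:
fraction = (m_a_reduced / m_a_total) ** 4   # ~0.62
```

    The quartic dependence is why removing roughly a third of the heat budget tightens the mass limit by only about ten percent.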

    A geoneutrino experiment at Homestake

    A significant fraction of the 44 TW of heat dissipation from the Earth's interior is believed to originate from the decays of terrestrial uranium and thorium. The only estimates of this radiogenic heat, which is the driving force for mantle convection, come from Earth models based on meteorites, and have large systematic errors. The detection of electron antineutrinos produced by these uranium and thorium decays would allow a more direct measure of the total uranium and thorium content, and hence radiogenic heat production, in the Earth. We discuss the prospect of building an electron antineutrino detector approximately 700 m^3 in size in the Homestake mine at the 4850' level. This would allow us to make a measurement of the total uranium and thorium content with a statistical error less than the systematic error from our current knowledge of neutrino oscillation parameters. It would also allow us to test the hypothesis of a naturally occurring nuclear reactor at the center of the Earth. Comment: proceedings for Neutrino Sciences 2005, submitted to Earth, Moon, and Planets.
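    The claim that the statistical error can be pushed below the oscillation-parameter systematic rests on simple Poisson counting: the relative statistical uncertainty on a counted signal falls as 1/sqrt(N). A sketch, with event counts that are hypothetical rather than the proposal's:

```python
import math

def relative_stat_error(n_events):
    """Relative statistical (Poisson) uncertainty on a counted signal."""
    return 1.0 / math.sqrt(n_events)

# Illustrative only: these event counts are invented, not the detector's
# expected geoneutrino yield.
for n in (100, 400, 1600):
    print(n, f"{relative_stat_error(n):.1%}")
```

    Quadrupling the exposure halves the statistical error, so the detector size and run length set how far below the few-percent oscillation systematic the counting error can be driven.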

    Atmospheric CH4 and N2O measurements near Greater Houston area landfills using a QCL-based QEPAS sensor system during DISCOVER-AQ 2013

    A quartz-enhanced photoacoustic absorption spectroscopy (QEPAS)-based gas sensor was developed for methane (CH4) and nitrous oxide (N2O) detection. The QEPAS-based sensor was installed in a mobile laboratory operated by Aerodyne Research, Inc. to perform atmospheric CH4 and N2O detection around two urban waste-disposal sites located in the northeastern part of the Greater Houston area during DISCOVER-AQ, a NASA Earth Venture mission, in September 2013. A continuous wave, thermoelectrically cooled, 158 mW distributed feedback quantum cascade laser emitting at 7.83 μm was used as the excitation source in the QEPAS gas sensor system. Compared to typical ambient atmospheric mixing ratios of CH4 and N2O of 1.8 ppmv and 323 ppbv, respectively, significant increases in mixing ratios were observed when the mobile laboratory was circling the two waste-disposal sites in Harris County and when waste-disposal trucks were encountered. © 2014 Optical Society of America
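    Identifying the reported enhancements amounts to comparing mobile readings against the ambient background quoted in the abstract. A minimal sketch; the 50% threshold and the drive-by readings are invented, not the campaign's data or method.

```python
# Ambient background from the abstract; threshold and readings are illustrative.
ch4_background_ppmv = 1.8
threshold = 1.5   # flag readings 50% above background (arbitrary choice)

readings_ppmv = [1.79, 1.82, 2.95, 4.10, 1.85, 3.40]   # hypothetical drive-by data
enhanced = [x for x in readings_ppmv if x > ch4_background_ppmv * threshold]
# enhanced -> [2.95, 4.10, 3.40]
```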

    Particulate polycyclic aromatic hydrocarbon spatial variability and aging in Mexico City

    As part of the Megacities Initiative: Local and Global Research Observations (MILAGRO) study in the Mexico City Metropolitan Area in March 2006, we measured particulate polycyclic aromatic hydrocarbons (PAHs) and other gaseous species and particulate properties at six locations throughout the city. The measurements were intended to support the following objectives: to describe spatial and temporal patterns in PAH concentrations, to gain insight into sources and transformations of PAHs, and to quantify the relationships between PAHs and other pollutants. Total particulate PAHs at the Instituto Mexicano del Petróleo (T0 supersite) located near downtown averaged 50 ng m^-3, and aerosol active surface area averaged 80 mm^2 m^-3. PAHs were also measured on board the Aerodyne Mobile Laboratory, which visited six sites encompassing a mixture of different land uses and a range of ages of air parcels transported from the city core. Weak intersite correlations suggest that local sources are important and variable and that exposure to PAHs cannot be represented by a single regional-scale value. The relationships between PAHs and other pollutants suggest that a variety of sources and ages of particles are present. Among carbon monoxide, nitrogen oxides (NOx), and carbon dioxide, particulate PAHs are most strongly correlated with NOx. Mexico City's PAH-to-black carbon mass ratio of 0.01 is similar to that found on a freeway loop in the Los Angeles area and approximately 8-30 times higher than that found in other cities. Ratios also indicate that primary combustion particles are rapidly coated by secondary aerosol in Mexico City. If so, the lifetime of PAHs may be prolonged if the coating protects them against photodegradation or heterogeneous reactions.
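    The co-pollutant correlation comparison can be illustrated with synthetic time series constructed so that PAH tracks NOx most closely, as the study found. The data and coupling strengths below are invented, not MILAGRO measurements.

```python
import numpy as np

# Synthetic series: PAH built to track NOx tightly, CO loosely, CO2 barely.
rng = np.random.default_rng(42)
nox = rng.uniform(10, 100, 500)            # ppb, hypothetical
pah = 0.5 * nox + rng.normal(0, 2, 500)    # ng/m^3, tightly coupled to NOx
co  = 0.1 * nox + rng.normal(0, 10, 500)   # loosely coupled
co2 = 400 + rng.normal(0, 20, 500)         # essentially decoupled, ppm

# Pearson correlation of PAH against each co-pollutant:
r = {name: np.corrcoef(pah, x)[0, 1]
     for name, x in [("NOx", nox), ("CO", co), ("CO2", co2)]}
strongest = max(r, key=lambda k: abs(r[k]))   # -> "NOx"
```

    In the field study such pairwise correlations, computed over mobile-laboratory transects, are what support the inference that fresh combustion (NOx-like) sources dominate the particulate PAH signal.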

    Operational experience, improvements, and performance of the CDF Run II silicon vertex detector

    The Collider Detector at Fermilab (CDF) pursues a broad physics program at Fermilab's Tevatron collider. Between Run II commissioning in early 2001 and the end of operations in September 2011, the Tevatron delivered 12 fb^-1 of integrated luminosity of p-pbar collisions at sqrt(s) = 1.96 TeV. Many physics analyses undertaken by CDF require heavy flavor tagging with large charged particle tracking acceptance. To realize these goals, in 2001 CDF installed eight layers of silicon microstrip detectors around its interaction region. These detectors were designed for 2-5 years of operation and radiation doses up to 2 Mrad (0.02 MGy), and were expected to be replaced in 2004. The sensors were not replaced, and the Tevatron run was extended for several years beyond its design, exposing the sensors and electronics to much higher radiation doses than anticipated. In this paper we describe the operational challenges encountered over the past 10 years of running the CDF silicon detectors, the preventive measures undertaken, and the improvements made along the way to ensure their optimal performance for collecting high quality physics data. In addition, we describe the quantities and methods used to monitor radiation damage in the sensors for optimal performance and summarize the detector performance quantities important to CDF's physics program, including vertex resolution, heavy flavor tagging, and silicon vertex trigger performance. Comment: Preprint accepted for publication in Nuclear Instruments and Methods A (07/31/2013).
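    One standard radiation-damage quantity a monitoring program like this tracks is sensor leakage current, which grows linearly with the 1 MeV-neutron-equivalent fluence. A sketch with textbook-order numbers; the damage rate alpha, the sensor geometry, and the fluence below are illustrative values, not CDF's measurements.

```python
# Standard linear damage parameterization: delta_I = alpha * Phi_eq * V.
# All numbers below are order-of-magnitude illustrations.
alpha = 4e-17                        # A/cm, typical annealed value near 20 C
sensor_volume_cm3 = 6.0 * 6.0 * 0.03  # ~6 cm x 6 cm active area, 300 um thick
fluence_neq_cm2 = 1e13               # hypothetical accumulated equivalent fluence

leakage_current_a = alpha * fluence_neq_cm2 * sensor_volume_cm3
print(f"expected leakage current: {leakage_current_a * 1e3:.2f} mA")
```

    Tracking measured currents against this linear expectation (and against temperature, since leakage current roughly doubles for every ~7 C) is how a silicon system's accumulated dose and remaining headroom are typically assessed.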